Backoff inspired features for maximum entropy language models
Authors
Abstract
Maximum Entropy (MaxEnt) language models [1, 2] are linear models that are typically regularized via well-known L1 or L2 terms in the likelihood objective, thus avoiding the need for the kinds of backoff or mixture weights used in smoothed n-gram language models with Katz backoff [3] and similar techniques. Although backoff costs are not required to regularize the model, we investigate the use of backoff features in MaxEnt models, as well as several backoff-inspired variants. These features yield substantial improvements in model quality, measured as reductions in perplexity and word error rate, even in very large-scale training scenarios with tens or hundreds of billions of words and hundreds of millions of features.
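As a minimal sketch of the idea (not the paper's implementation; function and feature names here are illustrative), a backoff-inspired feature can be read as an indicator that fires when the full n-gram was unseen in training, keyed on the backed-off context — mimicking the cost a backoff n-gram model would pay at that point:

```python
from collections import Counter

def ngram_counts(corpus, order=3):
    """Count all n-grams up to `order` in a tokenized corpus."""
    counts = Counter()
    for sent in corpus:
        toks = ["<s>"] * (order - 1) + sent + ["</s>"]
        for i in range(order - 1, len(toks)):
            for n in range(1, order + 1):
                counts[tuple(toks[i - n + 1 : i + 1])] += 1
    return counts

def backoff_features(context, word, counts):
    """Emit n-gram features plus backoff-inspired indicator features.

    For each order, fire the n-gram feature if that n-gram was seen in
    training; otherwise fire a 'backoff' indicator keyed on the
    backed-off context suffix.
    """
    feats = []
    hist = list(context)
    for k in range(len(hist), -1, -1):
        ngram = tuple(hist[len(hist) - k :] + [word])
        if counts.get(ngram, 0) > 0:
            feats.append(("ngram", ngram))
        else:
            feats.append(("backoff", tuple(hist[len(hist) - k :])))
    return feats
```

In a linear model, each such feature carries its own learned weight, so the "backoff" indicators let the model learn a soft analogue of backoff penalties rather than fixing them by a smoothing formula.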
Similar resources
Statistical Models for the Analysis of Heterogeneous Biological Data Sets
Eugen Buehler, Lyle Ungar. The focus of this thesis is on developing methods of integrating heterogeneous biological feature sets into structured statistical models, so as to improve model predictions and further understanding of the complex systems that they emulate. Combining data from different sources is an important ta...
Entropy-based Pruning of Backoff Language Models
A criterion for pruning parameters from N-gram backoff language models is developed, based on the relative entropy between the original and the pruned model. It is shown that the relative entropy resulting from pruning a single N-gram can be computed exactly and efficiently for backoff models. The relative entropy measure can be expressed as a relative change in training set perplexity. This le...
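The criterion described above can be sketched as a history-weighted relative entropy between the original and pruned conditional distributions; the function and argument names below are illustrative, not taken from any library:

```python
import math

def relative_entropy(hist_probs, p_orig, p_pruned):
    """D(p || p') = sum_h p(h) * sum_w p(w|h) * log(p(w|h) / p'(w|h)).

    hist_probs: marginal probability p(h) of each history h.
    p_orig, p_pruned: dicts mapping h -> {w: p(w|h)} for the original
    and pruned models.
    """
    d = 0.0
    for h, ph in hist_probs.items():
        for w, pw in p_orig[h].items():
            d += ph * pw * math.log(pw / p_pruned[h][w])
    return d
```

Since training-set log-perplexity is an expectation of the same log-probabilities, this divergence translates directly into a relative change in training-set perplexity, which is what makes it usable as a pruning threshold.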
Maximum Entropy Translation Model in Dependency-Based MT Framework
Maximum Entropy Principle has been used successfully in various NLP tasks. In this paper we propose a forward translation model consisting of a set of maximum entropy classifiers: a separate classifier is trained for each (sufficiently frequent) source-side lemma. In this way the estimates of translation probabilities can be sensitive to a large number of features derived from the source senten...
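The per-lemma decomposition can be sketched as a routing structure like the following; a relative-frequency model stands in for the per-lemma MaxEnt classifiers, and all names are illustrative assumptions:

```python
from collections import Counter, defaultdict

class PerLemmaTranslator:
    """One model per (sufficiently frequent) source-side lemma, with a
    global fallback for rare or unseen lemmas."""

    def __init__(self, min_count=2):
        self.min_count = min_count
        self.by_lemma = defaultdict(Counter)   # lemma -> translation counts
        self.global_counts = Counter()         # fallback distribution

    def train(self, pairs):
        """pairs: iterable of (source_lemma, target_word)."""
        for src, tgt in pairs:
            self.by_lemma[src][tgt] += 1
            self.global_counts[tgt] += 1

    def translate(self, src):
        c = self.by_lemma.get(src)
        if c is not None and sum(c.values()) >= self.min_count:
            return c.most_common(1)[0][0]
        return self.global_counts.most_common(1)[0][0]
```

The design point is the routing: each frequent lemma gets its own conditional model, so translation estimates can condition on rich source-side features without one monolithic classifier.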
Statistical Analysis of Multilingual Text Corpus and Development of Language Models
This paper presents two studies: first, a statistical analysis of three languages, i.e. Hindi, Punjabi, and Nepali; and second, the development of language models for three Indian languages, i.e. Indian English, Punjabi, and Nepali. The main objective of this study is to find distinctions among these languages and to develop language models for their identification. Detailed statistical analysis h...
Learning Structured Information in Natural Language Applications
Recent literature on text-tagging reported successful results by applying Maximum Entropy (ME) models. In general, ME taggers rely on carefully selected binary features, which try to capture discriminant information from the training data. This paper introduces a standard setting of binary features, inspired by the literature on named-entity recognition and text chunking, and derives correspond...
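A standard set of binary features of the kind this abstract refers to can be sketched as follows; the specific templates and their names are illustrative, in the spirit of named-entity recognition and chunking feature sets:

```python
def binary_features(tokens, i):
    """Binary feature templates for the token at position i."""
    w = tokens[i]
    prev_w = tokens[i - 1].lower() if i > 0 else "<s>"
    next_w = tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>"
    return {
        f"word={w.lower()}": 1,      # current word identity
        f"prev={prev_w}": 1,         # left-context word
        f"next={next_w}": 1,         # right-context word
        "is_cap": int(w[0].isupper()),   # capitalization cue
        "is_digit": int(w.isdigit()),    # numeric token cue
        f"suffix3={w.lower()[-3:]}": 1,  # 3-character suffix
    }
```

Each active template becomes one binary feature in the ME tagger, so the model's weights directly measure how discriminative each cue is for each tag.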